pkg/sql: expand metrics used by the SQL stats activity update job #120522
Conversation
It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR? 🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
Reviewed 3 of 5 files at r2, 1 of 1 files at r3, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @abarganier and @xinhaoz)
pkg/sql/sql_activity_update_job.go
line 140 at r3 (raw file):
Name: "sql.stats.activity_job.runs.failed", Help: "Number of runs made by the SQL activity updater job that failed with errors", Measurement: "failed runs",
Is it an appropriate measurement? I assume it should be `jobs`, as it measures the number of jobs. The metric name describes that it counts failed job executions. WDYT?
pkg/sql/sql_activity_update_job.go
line 147 at r3 (raw file):
Name: "sql.stats.activity_job.runs.successful", Help: "Number of successful runs made by the SQL activity updater job", Measurement: "successful runs",
Same as above.
Force-pushed from 15b3c4e to 94736c3.
I'm wondering if we should also track the latency of unsuccessful runs. WDYT? That way this can just be `latency`, and the job result can be viewed using the other two metrics.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @abarganier and @xinhaoz)
pkg/sql/sqlstats/persistedsqlstats/provider.go
line 206 at r5 (raw file):
}(); sigCh != nil { select { case sigCh <- struct{}{}:
Unrelated to the main fix, but curious to clarify (I can submit a separate issue if needed): what is the chance of receiving a signal here now that we have a `default` case? I assume that flushing takes at least some time, so the `default` case will be taken most of the time?
Force-pushed from 94736c3 to 67f45e2.
Thanks for the reviews! PTAL. Also note that I added a 4th commit, adding a new counter to track the # of unique stmt & txn fingerprints included in each flush. I expect this to be a useful metric as we continue working on fingerprinting improvements/cardinality reduction.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @koorosh and @xinhaoz)
pkg/sql/sql_activity_update_job.go
line 154 at r1 (raw file):
Previously, xinhaoz (Xin Hao Zhang) wrote…
I'm wondering if we should also track latency of unsuccessful runs too. WDYT? So this can just be latency and the job result can be viewed using the other 2 metrics.
Good idea! I've switched it to record the latency in either case.
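For context, here is a minimal sketch of the pattern described above: the latency is recorded on both the success and failure paths, while success/failure counts are left to the other two metrics. `recordLatency` and `runUpdate` are hypothetical stand-ins, not the actual CockroachDB code, which uses its own metric machinery.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// recordLatency stands in for the latency histogram discussed here (the real
// code uses CockroachDB's metric machinery, which is not reproduced).
func recordLatency(d time.Duration) {
	fmt.Println("observed update latency:", d)
}

// runUpdate wraps a single activity-update attempt and records its latency
// whether it succeeds or fails; success and failure counts are handled by
// the other two metrics.
func runUpdate(update func() error) error {
	start := time.Now()
	defer func() { recordLatency(time.Since(start)) }()
	return update()
}

func main() {
	_ = runUpdate(func() error { return nil })                         // successful run: latency recorded
	_ = runUpdate(func() error { return errors.New("update failed") }) // failed run: latency still recorded
}
```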
pkg/sql/sql_activity_update_job.go
line 140 at r3 (raw file):
Previously, koorosh (Andrii Vorobiov) wrote…
Is it an appropriate measurement? I assume it should be `jobs`, as it measures the number of jobs. The metric name describes that it counts failed job executions. WDYT?
As discussed on Slack, changed the terminology to `updates`.
pkg/sql/sql_activity_update_job.go
line 147 at r3 (raw file):
Previously, koorosh (Andrii Vorobiov) wrote…
Same as above.
See above comment.
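For reference, a minimal sketch of what the renamed metadata might look like after the terminology change. The metric names and help text are taken from the commit messages in this PR; `metricMetadata` is a local stand-in for CockroachDB's `metric.Metadata` (which carries more fields), and the `Measurement` values are an assumption based on the agreed "updates" wording.

```go
package main

import "fmt"

// metricMetadata is a local stand-in that mirrors only the fields visible in
// the quoted snippet (Name, Help, Measurement).
type metricMetadata struct {
	Name, Help, Measurement string
}

// Names and help text follow the commit messages in this PR; the Measurement
// values are assumed from the "updates" terminology agreed on in this thread.
var (
	metaUpdatesFailed = metricMetadata{
		Name:        "sql.stats.activity.updates.failed",
		Help:        "Number of update attempts made by the SQL activity updater job that failed with errors",
		Measurement: "failed updates",
	}
	metaUpdatesSuccessful = metricMetadata{
		Name:        "sql.stats.activity.updates.successful",
		Help:        "Number of successful updates made by the SQL activity updater job",
		Measurement: "successful updates",
	}
)

func main() {
	fmt.Printf("%+v\n%+v\n", metaUpdatesFailed, metaUpdatesSuccessful)
}
```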
pkg/sql/sqlstats/persistedsqlstats/provider.go
line 206 at r5 (raw file):
Previously, koorosh (Andrii Vorobiov) wrote…
Unrelated to the main fix, but curious to clarify (I can submit a separate issue if needed): what is the chance of receiving a signal here now that we have a `default` case? I assume that flushing takes at least some time, so the `default` case will be taken most of the time?
When this was created, I believe the assumption was that the activity job would always be listening on the channel, which is why there was no `default` case originally. As we've seen, that's not always the case, especially when a customer has a workload that generates high cardinality stats.
I don't think we have a great answer to your question yet, unfortunately. Hopefully these metrics help us answer that!
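To illustrate the pattern under discussion, here is a minimal sketch of a non-blocking flush-done signal: the `default` branch keeps the flush from hanging when no one is listening and increments an ignored-signal counter. `signalFlushDone` and `doneSignalsIgnored` are hypothetical stand-ins for the real code in `provider.go`, which uses CockroachDB's metric.Counter.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// doneSignalsIgnored stands in for the counter described above (named
// sql.stats.flush.done_signals.ignored in the commits).
var doneSignalsIgnored atomic.Int64

// signalFlushDone sends the flush-done signal without blocking. If the
// activity update job is not currently listening, the default branch keeps
// the flush from hanging and increments the ignored-signal counter so the
// situation stays observable.
func signalFlushDone(sigCh chan<- struct{}) {
	select {
	case sigCh <- struct{}{}:
		// A listener received the signal.
	default:
		doneSignalsIgnored.Add(1)
	}
}

func main() {
	sigCh := make(chan struct{}) // unbuffered, as implied by the quoted snippet

	// No one is receiving yet, so this signal is dropped and counted.
	signalFlushDone(sigCh)

	// With a listener blocked on the channel, the send succeeds.
	go func() { <-sigCh; fmt.Println("flush-done signal received") }()
	time.Sleep(10 * time.Millisecond)
	signalFlushDone(sigCh)

	time.Sleep(10 * time.Millisecond)
	fmt.Println("ignored signals:", doneSignalsIgnored.Load())
}
```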
Addresses: cockroachdb#119779

Currently, the SQL activity update job is lacking observability. While we have a metric for job failures, we've seen instances where the query run by the job gets caught in a retry loop, meaning the metric is rarely incremented. Therefore, additional metrics, such as counts of successful runs and the latency of successful runs, will be helpful to further inspect the state of the job. This patch adds metrics for both.

Release note (ops change): Two new metrics have been added to track the status of the SQL activity update job, which is used to pre-aggregate top K information within the SQL stats subsystem and write the results to `system.statement_activity` and `system.transaction_activity`. The new metrics are:
- `sql.stats.activity.updates.successful`: Number of successful updates made by the SQL activity updater job.
- `sql.stats.activity.update.latency`: The latency of updates made by the SQL activity updater job. Includes failed update attempts.
Addresses: cockroachdb#119779

We've had escalations recently involving the SQL activity update job running for extended periods of time, such that the signal made to the job indicating a flush has completed was not received because there was no listener. While we've added a default case to prevent this from hanging the flush job, and some logging to go with it, a counter metric indicating when this occurs would also be useful to have when debugging. This patch adds such a counter.

Release note (ops change): A new counter metric, `sql.stats.flush.done_signals.ignored`, has been introduced. The metric tracks the number of times the SQL Stats activity update job ignored the signal sent to it indicating a flush has completed. This may indicate that the SQL Activity update job is taking longer than expected to complete.
The metric used to track failures of the SQL Activity update job didn't have a descriptive name, and the help text was grammatically incorrect. Furthermore, the metric name is the same as a metric used within the job system, meaning one of these metrics is probably clobbering the other when writing to TSDB or outputting to `/_status/vars`. This patch simply updates the metric name to better describe what it measures, and fixes the help text description.

Release note (ops change): A new counter metric, `sql.stats.activity.updates.failed`, has been introduced to measure the number of update attempts made by the SQL activity updater job that failed with errors. The SQL activity update job is used to pre-aggregate top K information within the SQL stats subsystem and write the results to `system.statement_activity` and `system.transaction_activity`.
Addresses: cockroachdb#119779

The count of unique fingerprints flushed to `system.statement_statistics` and `system.transaction_statistics` is the core component that determines data cardinality within the SQL stats subsystem. Today, we don't have good metrics around this source of cardinality. As we aim to reduce cardinality by improving our fingerprinting algorithms, creating a metric to count the number of unique statement and transaction fingerprints included in each flush of the in-memory SQL stats will be a helpful measurement to benchmark cardinality reduction. This patch adds a new metric to track the # of unique fingerprints (stmt and txn) included in each flush.

Release note (ops change): A new counter metric, `sql.stats.flush.fingerprint.count`, has been introduced. The metric tracks the number of unique statement and transaction fingerprints included in the SQL Stats flush.
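A minimal sketch of the counting described in this commit message, assuming the flush has the set of unique statement and transaction fingerprints at hand; all names below are hypothetical stand-ins for the real implementation, which uses CockroachDB's metric.Counter and its own fingerprint ID types.

```go
package main

import "fmt"

// flushedFingerprintCount stands in for the counter described in the commit
// message (sql.stats.flush.fingerprint.count).
var flushedFingerprintCount int64

// recordFlushedFingerprints bumps the counter by the number of unique
// statement and transaction fingerprints included in a single flush. The
// uint64 keys are hypothetical stand-ins for the real fingerprint ID types.
func recordFlushedFingerprints(stmtFingerprints, txnFingerprints map[uint64]struct{}) {
	flushedFingerprintCount += int64(len(stmtFingerprints) + len(txnFingerprints))
}

func main() {
	stmts := map[uint64]struct{}{1: {}, 2: {}, 3: {}}
	txns := map[uint64]struct{}{10: {}}
	recordFlushedFingerprints(stmts, txns)
	fmt.Println("fingerprints flushed:", flushedFingerprintCount) // 4
}
```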
Force-pushed from 67f45e2 to 099d0b0.
Reviewed 6 of 6 files at r6, 5 of 5 files at r7, 2 of 2 files at r8, 6 of 6 files at r9, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @abarganier and @xinhaoz)
pkg/sql/sqlstats/persistedsqlstats/provider.go
line 206 at r5 (raw file):
Previously, abarganier (Alex Barganier) wrote…
When this was created, I believe the assumption was that the activity job would always be listening on the channel, which is why there was no `default` case originally. As we've seen, that's not always the case, especially when a customer has a workload that generates high cardinality stats. I don't think we have a great answer to your question yet, unfortunately. Hopefully these metrics help us answer that!
👍🏼
TFTR! bors r=koorosh
pkg/sql: expand metrics used by the SQL stats activity update job
Addresses: #119779
Epic: CRDB-24527
Currently, the SQL activity update job is lacking observability. While
we have a metric for job failures, we've seen instances where the query
run by the job gets caught in a retry loop, meaning the metric is rarely
incremented.
Therefore, additional metrics, such as counts of successful runs, and
the latency of successful runs, will be helpful to further inspect the
state of the job.
This patch adds metrics for both.
We've also had escalations recently involving the SQL activity update job
running for extended periods of time, such that the signal made to the
job indicating a flush has completed was not received because there was
no listener.
While we've added a default case to prevent this from hanging the flush
job, and some logging to go with it, a counter metric indicating when
this occurs would also be useful to have when debugging.
This patch adds such a counter.
Finally, we rename the metric counting failures of the job to
`sql.stats.activity_job.runs.failed`, as the old metric name was not
descriptive.
Release note (ops change): Two new metrics have been added to track the
status of the SQL activity update job, which is used to pre-aggregate
top K information within the SQL stats subsystem and write the results to
`system.statement_activity` and `system.transaction_activity`. The new metrics are:
- `sql.stats.activity_job.runs.successful`: Number of successful runs made by the SQL activity updater job.
- `sql.stats.activity_job.latency`: The latency of successful runs made by the SQL activity updater job.
Release note (ops change): A new counter metric,
`sql.stats.flush.done_signals_ignored`, has been introduced. The metric
tracks the number of times the SQL Stats activity update job ignored
the signal sent to it indicating a flush has completed. This may
indicate that the SQL Activity update job is taking longer than expected
to complete.
Release note (ops change): A new counter metric,
`sql.stats.activity_job.runs.failed`, has been introduced to measure the
number of runs made by the SQL activity updater job that failed with
errors. The SQL activity update job is used to pre-aggregate top K
information within the SQL stats subsystem and write the results to
`system.statement_activity` and `system.transaction_activity`.